What is the "Open Global Investment" model?
“Open Global Investment” (OGI) is a model of AI governance set forth by Nick Bostrom in a 2025 working paper.
OGI involves AI development being led by corporations within a government-set framework that backs their efforts to create AGI while enforcing safety rules. It requires these corporations to be open to investment from a wide range of sources, including individuals worldwide and foreign governments. Versions of OGI differ in how many corporations participate (a single company in “OGI-1”, many in “OGI-N”) and in which country hosts them (most likely the US).
Bostrom argues that this system is:
- Robust, because it builds on existing norms about property rights, which are well-tested in practice in a way that newly designed governance schemes are not.
- Realistic as a political option, because it’s relatively easy to set up, and compatible with the interests of major players like AI companies and the US government. In contrast, a model in which AI development is led by the UN would take power away from incumbents, so they’d be less likely to agree to it.
- Equitable compared to realistic alternatives, with payoffs from AGI distributed among a wide range of investors. This helps avoid scenarios in which AGI ends up in the hands of a single small group. The wealthy would be most able to invest, but Bostrom argues the result would still be less unequal than under a model in which only the US holds a stake, or one in which the payoffs go only to private investors in a company like OpenAI. This wider distribution would in turn reduce incentives to undermine or compete with the project, mitigating race dynamics.
Bostrom compares OGI-1 and OGI-N with alternative models based on the Manhattan Project, CERN, and Intelsat. The following overview table reflects Bostrom’s stated views where available; elsewhere it fills in our own judgment:
| Feature | OGI-1 | OGI-N | Manhattan Project | CERN for AGI | Intelsat | Status Quo |
|---|---|---|---|---|---|---|
| Will incumbents support it? | Medium | Medium-high | Low | Low | Low | High |
| Is it open to investment? | Yes (public) | Yes (public) | No | No | Yes (states) | Some private, some public |
| Who gets a share in the benefits/control? | Global population | Global population | National government | Global governments | Global governments | Tech investors |
| Does it involve massive government funding? | No | No | Yes | Yes | Yes | No |
| How much does it concentrate power? | Low | Very low | High | High | Medium | Medium |
| Effect on international conflict | Reduction | Reduction | Increase | Reduction | Reduction | Baseline |
| Adaptability of framework | High | High | Low | Low | Medium | High |
| Setup speed | Months | Months | Years | Years | Years | None |
| Does it require novel regulations/laws/norms? | Medium | Medium | Medium | Medium | Medium | Low |
| Difficulty of securing IP | Medium | Medium | Low | High | High | Medium |
| Does it preclude other projects? | No | No | No | No | No | No |
| Disincentive to other projects | Medium | Medium | High | High | Medium | Low |
| Can the government seize it? | Harder? | Harder? | Hard to say | Hard to say | Hard to say | Baseline |
| Is it private or public? | Private | Private | Public | Public | Public | Private |
| Ability to enforce safety standards in project | Medium | Medium | Medium-high | Medium-high | Medium-high | Low |
| Who controls the project? | Government & lab leads | Government & lab leads | Government | UN/all governments | Participating governments | Lab leads |
| Profits taxed by | Host government | Host government | N/A | N/A | Participating governments | Host government |
OGI is meant as a feasible compromise rather than as a perfectly fair system; versions with multiple AI companies (OGI-N) are not hugely different from the status quo. It’s also meant more as a temporary transitional arrangement than as a model for governing superintelligence itself. The hope is that OGI would keep development relatively safe and conflict-free while humanity came up with something better for the long term.